
Exploring the Combination of Kubernetes and AI: Scheduling PyTorch Training Jobs on k8s in Practice


Overview

Kubernetes' core strength is that it provides a scalable, flexible, and highly configurable platform that makes deploying, scaling, and managing applications simpler than ever. For general-purpose compute it is already mature: cloud-native applications, databases, and other services can easily be deployed on Kubernetes with high availability and elasticity.

Things become more complicated, however, once heterogeneous compute resources are involved. Accelerators such as GPUs, FPGAs, and NPUs offer enormous computational advantages, especially for specific compute-intensive workloads, but integrating and managing them is not as straightforward as general-purpose compute. Because drivers and management tools differ significantly across hardware vendors, Kubernetes still has limitations in uniformly scheduling and orchestrating these resources. This hurts resource utilization and puts an extra management burden on developers.

The following walks through building a GPU-enabled k8s cluster on a personal laptop and submitting PyTorch training jobs to a GPU-equipped k8s cluster with kueue, kubeflow, and karmada.

GPU support in k8s

Kubernetes supports GPUs through the device plugin mechanism: you install the GPU vendor's driver and device plugin on the node, and Pods then consume GPU capability by requesting the corresponding extended resource. Common development-cluster tools differ in their GPU support: Kind does not support GPUs yet, while Minikube and K3d support NVIDIA GPUs on Linux.

Building a GPU-enabled k8s cluster with an RTX 3060

Prerequisites

- Go v1.20+
- kubectl v1.19+
- Minikube v1.24.0+
- Docker v24.0.6+
- NVIDIA driver, latest version
- NVIDIA Container Toolkit, latest version

Notes:

- An Ubuntu system with an RTX 3060 (or similar) GPU. It must not be a virtual machine unless the hypervisor (e.g. PVE or ESXi) supports GPU passthrough. Windows WSL does not work: WSL uses a custom Linux kernel that lacks many kernel modules, which breaks the NVIDIA driver.
- Network access to code and repositories on GitHub, Google, and Docker.

GPU in Docker

After completing the steps above, confirm that Docker can schedule the GPU. This can be verified as follows.

Create the following docker compose file:

```yaml
services:
  test:
    image: nvidia/cuda:12.3.1-base-ubuntu20.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Run the CUDA task with Docker:

```
docker compose up
Creating network "gpu_default" with the default driver
Creating gpu_test_1 ... done
Attaching to gpu_test_1
test_1  | +-----------------------------------------------------------------------------+
test_1  | | NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.1     |
test_1  | |-------------------------------+----------------------+----------------------+
test_1  | | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
test_1  | | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
test_1  | |                               |                      |               MIG M. |
test_1  | |===============================+======================+======================|
test_1  | |   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
test_1  | | N/A   23C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
test_1  | |                               |                      |                  N/A |
test_1  | +-------------------------------+----------------------+----------------------+
test_1  |
test_1  | +-----------------------------------------------------------------------------+
test_1  | | Processes:                                                                  |
test_1  | |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
test_1  | |        ID   ID                                                   Usage      |
test_1  | |=============================================================================|
test_1  | |  No running processes found                                                 |
test_1  | +-----------------------------------------------------------------------------+
gpu_test_1 exited with code 0
```

GPU in Minikube

Configure Minikube and start the Kubernetes cluster:

```
minikube start --driver docker --container-runtime docker --gpus all
```

Verify the cluster's GPU capability

Confirm that the node reports GPU capacity:

```
kubectl describe node minikube
...
Capacity:
  nvidia.com/gpu: 1
...
```

Then test running a CUDA workload inside the cluster, e.g. with a Pod that requests a GPU; a sketch of such a test Pod follows below.
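This is only an illustrative sketch (the Pod name is an assumption); it reuses the nvidia/cuda image from the Docker verification step above and requests one GPU through the device plugin's extended resource:

```yaml
# Hypothetical GPU smoke-test Pod; the name is illustrative,
# the image is the one used in the Docker verification above.
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:12.3.1-base-ubuntu20.04
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # one GPU via the NVIDIA device plugin
```

If the device plugin is working, `kubectl logs cuda-smoke-test` should print the same kind of nvidia-smi table as the Docker test.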

kueue

kueue implements resource sharing at different levels through its queue hierarchy (ClusterQueue -> LocalQueue) down to the nodes, and it already supports scheduling AI/ML workloads such as Ray-related jobs on k8s clusters. kueue distinguishes administrators from ordinary users: administrators manage resources such as ResourceFlavor, ClusterQueue, and LocalQueue and set the quotas of the resource pools, while ordinary users submit batch jobs or the various kinds of Ray jobs.

Running a PyTorch training job

Install kueue

kueue requires k8s 1.22+. Install it with the following command:

```
kubectl apply --server-side -f https://github.com/kubernetes-sigs/kueue/releases/download/v0.6.0/manifests.yaml
```

Configure the cluster quota:

```
git clone https://github.com/kubernetes-sigs/kueue.git && cd kueue
kubectl apply -f examples/admin/single-clusterqueue-setup.yaml
```
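The single-clusterqueue-setup example applied above creates the three kinds of objects described earlier: a ResourceFlavor, a ClusterQueue that holds the quotas, and a LocalQueue that namespaced workloads point at. The sketch below only illustrates that shape; the names and quota values here are assumptions, not the exact contents of the upstream example.

```yaml
# Illustrative kueue quota setup (names and quota values are assumptions).
apiVersion: kueue.x-k8s.io/v1beta1
kind: ResourceFlavor
metadata:
  name: default-flavor
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: ClusterQueue
metadata:
  name: cluster-queue
spec:
  namespaceSelector: {}   # admit workloads from all namespaces
  resourceGroups:
    - coveredResources: ["cpu", "memory", "nvidia.com/gpu"]
      flavors:
        - name: default-flavor
          resources:
            - name: "cpu"
              nominalQuota: 8
            - name: "memory"
              nominalQuota: 16Gi
            - name: "nvidia.com/gpu"
              nominalQuota: 1
---
apiVersion: kueue.x-k8s.io/v1beta1
kind: LocalQueue
metadata:
  name: user-queue
  namespace: default
spec:
  clusterQueue: cluster-queue
```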

Strictly speaking, you can submit the PyTorch training job without installing kueue, because PyTorchJob is a CRD of the kubeflow training-operator. The benefit of installing kueue is that it supports many more kinds of jobs.

Besides kubeflow jobs it also supports kuberay jobs, and it has a built-in administrator role, which makes it easier to configure the cluster and to cap and manage cluster resources. It supports priority queues and job preemption, which makes it better suited to scheduling and managing AI/ML workloads. The cluster quota installed above sets the limits for jobs, so that overly demanding workloads fail fast before execution instead of overloading the cluster.
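To have kueue actually admit and queue a workload, the job is labeled with the LocalQueue it should go to. Below is a minimal sketch assuming the user-queue LocalQueue from the sketch above; the PyTorchJob manifest later in this post omits this label and is therefore handled by the training-operator directly.

```yaml
# Sketch: attaching the PyTorchJob to a kueue LocalQueue via the queue-name label.
# "user-queue" is an assumed LocalQueue name; the rest of the manifest is unchanged.
apiVersion: kubeflow.org/v1
kind: PyTorchJob
metadata:
  name: pytorch-simple
  namespace: kubeflow
  labels:
    kueue.x-k8s.io/queue-name: user-queue
# spec: ... identical to the PyTorchJob spec shown below ...
```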

Install the kubeflow training-operator

```
kubectl apply -k "github.com/kubeflow/training-operator/manifests/overlays/standalone"
```

Run the FashionMNIST training job

FashionMNIST is a dataset commonly used for image-classification tasks. It is similar to the classic MNIST dataset, but contains more complex clothing categories.

The FashionMNIST dataset contains clothing images in 10 classes. Each class has 6,000 training images and 1,000 test images, for a total of 60,000 training images and 10,000 test images. Each image is a 28x28-pixel grayscale picture of an item of clothing such as a T-shirt, trousers, a shirt, or a dress.

We submit a PyTorchJob to the cluster. To preserve the logs and results produced during training, we use an openebs hostpath volume to persist the training data onto the node, because once the job has finished we can no longer log in to view them. Create the following resources:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pytorch-results-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: "kubeflow.org/v1"
kind: PyTorchJob
metadata:
  name: pytorch-simple
  namespace: kubeflow
spec:
  pytorchReplicaSpecs:
    Master:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: pytorch-mnist:2.2.1-cuda12.1-cudnn8-runtime
              imagePullPolicy: IfNotPresent
              command:
                - "python3"
                - "/opt/pytorch-mnist/mnist.py"
                - "--epochs=10"
                - "--batch-size"
                - "32"
                - "--test-batch-size"
                - "64"
                - "--lr"
                - "0.01"
                - "--momentum"
                - "0.9"
                - "--log-interval"
                - "10"
                - "--save-model"
                - "--log-path"
                - "/results/master.log"
              volumeMounts:
                - name: result-volume
                  mountPath: /results
          volumes:
            - name: result-volume
              persistentVolumeClaim:
                claimName: pytorch-results-pvc
    Worker:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: pytorch
              image: pytorch-mnist:2.2.1-cuda12.1-cudnn8-runtime
              imagePullPolicy: IfNotPresent
              command:
                - "python3"
                - "/opt/pytorch-mnist/mnist.py"
                - "--epochs=10"
                - "--batch-size"
                - "32"
                - "--test-batch-size"
                - "64"
                - "--lr"
                - "0.01"
                - "--momentum"
                - "0.9"
                - "--log-interval"
                - "10"
                - "--save-model"
                - "--log-path"
                - "/results/worker.log"
              volumeMounts:
                - name: result-volume
                  mountPath: /results
          volumes:
            - name: result-volume
              persistentVolumeClaim:
                claimName: pytorch-results-pvc
```
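After the job completes, the logs written to the PVC can be inspected from a throwaway Pod that mounts the same claim. This is only a sketch; the Pod name and the busybox image are assumptions, not part of the original setup.

```yaml
# Hypothetical helper Pod for reading the training results from the PVC;
# the name and image are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: pytorch-results-viewer
  namespace: kubeflow
spec:
  restartPolicy: Never
  containers:
    - name: viewer
      image: busybox:1.36
      # Keep the container alive so we can exec in and look at /results.
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: result-volume
          mountPath: /results
  volumes:
    - name: result-volume
      persistentVolumeClaim:
        claimName: pytorch-results-pvc
```

For example, `kubectl -n kubeflow exec -it pytorch-results-viewer -- cat /results/master.log` would then show the master's training log.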

Here, pytorch-mnist:v1beta1-45c5727 is the code for a CNN training job in PyTorch; the full code is as follows:

```python
from __future__ import print_function

import argparse
import logging
import os

from torchvision import datasets, transforms
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

WORLD_SIZE = int(os.environ.get("WORLD_SIZE", 1))


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5, 1)
        self.conv2 = nn.Conv2d(20, 50, 5, 1)
        self.fc1 = nn.Linear(4*4*50, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2, 2)
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2, 2)
        x = x.view(-1, 4*4*50)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            msg = "Train Epoch: {} [{}/{} ({:.0f}%)]\tloss={:.4f}".format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item())
            logging.info(msg)
            niter = epoch * len(train_loader) + batch_idx


def test(args, model, device, test_loader, epoch):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction="sum").item()  # sum up batch loss
            pred = output.max(1, keepdim=True)[1]  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    logging.info("{{metricName: accuracy, metricValue: {:.4f}}};{{metricName: loss, metricValue: {:.4f}}}\n".format(
        float(correct) / len(test_loader.dataset), test_loss))


def should_distribute():
    return dist.is_available() and WORLD_SIZE > 1


def is_distributed():
    return dist.is_available() and dist.is_initialized()


def main():
    # Training settings
    parser = argparse.ArgumentParser(description="PyTorch MNIST Example")
    parser.add_argument("--batch-size", type=int, default=64, metavar="N",
                        help="input batch size for training (default: 64)")
    parser.add_argument("--test-batch-size", type=int, default=1000, metavar="N",
                        help="input batch size for testing (default: 1000)")
    parser.add_argument("--epochs", type=int, default=10, metavar="N",
                        help="number of epochs to train (default: 10)")
    parser.add_argument("--lr", type=float, default=0.01, metavar="LR",
                        help="learning rate (default: 0.01)")
    parser.add_argument("--momentum", type=float, default=0.5, metavar="M",
                        help="SGD momentum (default: 0.5)")
    parser.add_argument("--no-cuda", action="store_true", default=False,
                        help="disables CUDA training")
    parser.add_argument("--seed", type=int, default=1, metavar="S",
                        help="random seed (default: 1)")
    parser.add_argument("--log-interval", type=int, default=10, metavar="N",
                        help="how many batches to wait before logging training status")
    parser.add_argument("--log-path", type=str, default="",
                        help="Path to save logs. Print to StdOut if log-path is not set")
    parser.add_argument("--save-model", action="store_true", default=False,
                        help="For Saving the current Model")

    if dist.is_available():
        parser.add_argument("--backend", type=str, help="Distributed backend",
                            choices=[dist.Backend.GLOO, dist.Backend.NCCL, dist.Backend.MPI],
                            default=dist.Backend.GLOO)
    args = parser.parse_args()

    # Use this format (%Y-%m-%dT%H:%M:%SZ) to record timestamp of the metrics.
    # If log_path is empty print log to StdOut, otherwise print log to the file.
    if args.log_path == "":
        logging.basicConfig(
            format="%(asctime)s %(levelname)-8s %(message)s",
            datefmt="%Y-%m-%dT%H:%M:%SZ",
            level=logging.DEBUG)
    else:
        logging.basicConfig(
            format="%(asctime)s %(levelname)-8s %(message)s",
            datefmt="%Y-%m-%dT%H:%M:%SZ",
            level=logging.DEBUG,
            filename=args.log_path)

    use_cuda = not args.no_cuda and torch.cuda.is_available()
    if use_cuda:
        print("Using CUDA")

    torch.manual_seed(args.seed)

    device = torch.device("cuda" if use_cuda else "cpu")

    if should_distribute():
        print("Using distributed PyTorch with {} backend".format(args.backend))
        dist.init_process_group(backend=args.backend)

    kwargs = {"num_workers": 1, "pin_memory": True} if use_cuda else {}

    train_loader = torch.utils.data.DataLoader(
        datasets.FashionMNIST("./data",
                              train=True,
                              download=True,
                              transform=transforms.Compose([
                                  transforms.ToTensor()
                              ])),
        batch_size=args.batch_size, shuffle=True, **kwargs)
    test_loader = torch.utils.data.DataLoader(
        datasets.FashionMNIST("./data",
                              train=False,
                              transform=transforms.Compose([
                                  transforms.ToTensor()
                              ])),
        batch_size=args.test_batch_size, shuffle=False, **kwargs)

    model = Net().to(device)

    if is_distributed():
        Distributor = nn.parallel.DistributedDataParallel if use_cuda \
            else nn.parallel.DistributedDataParallelCPU
        model = Distributor(model)

    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(args, model, device, test_loader, epoch)

    if (args.save_model):
        torch.save(model.state_dict(), "mnist_cnn.pt")


if __name__ == "__main__":
    main()
```

Submit the training job to the k8s cluster:

```
kubectl apply -f sample-pytorchjob.yaml
```

After a successful submission, two training Pods appear, one for the master and one for the worker:

```
➜  ~ kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
pytorch-simple-master-0   1/1     Running   0          5m5s
pytorch-simple-worker-0   1/1     Running   0          5m5s
```

Checking the host's GPU while the job runs, you can clearly hear the machine's fans spin up, and running nvidia-smi shows two Python processes executing. Once they finish, the model file mnist_cnn.pt is produced.

```
➜  ~ nvidia-smi
Mon Mar  4 10:18:39 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.161.07             Driver Version: 535.161.07   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3060 ...    Off | 00000000:01:00.0 Off |                  N/A |
| N/A   39C    P0              24W /  80W |    753MiB /  6144MiB |      1%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      1674      G   /usr/lib/xorg/Xorg                          219MiB |
|    0   N/A  N/A      1961      G   /usr/bin/gnome-shell                         47MiB |
|    0   N/A  N/A      3151      G   gnome-control-center                          2MiB |
|    0   N/A  N/A      4177      G   ...irefox/3836/usr/lib/firefox/firefox      149MiB |
|    0   N/A  N/A     14476      C   python3                                     148MiB |
|    0   N/A  N/A     14998      C   python3                                     170MiB |
+---------------------------------------------------------------------------------------+
```

When submitting and running jobs, make sure the CUDA version and the PyTorch version match. The Dockerfile in the official demo looks like this:

```dockerfile
FROM pytorch/pytorch:1.0-cuda10.0-cudnn7-runtime

ADD examples/v1beta1/pytorch-mnist /opt/pytorch-mnist
WORKDIR /opt/pytorch-mnist

# Add folder for the logs.
RUN mkdir /katib
RUN chgrp -R 0 /opt/pytorch-mnist \
  && chmod -R g+rwX /opt/pytorch-mnist \
  && chgrp -R 0 /katib \
  && chmod -R g+rwX /katib

ENTRYPOINT ["python3", "/opt/pytorch-mnist/mnist.py"]
```

That image assumes training with PyTorch 1.0 and CUDA 10, while we are actually running CUDA 12, so building directly on that base image does not work: the job just stays in Running forever and never finishes. To avoid re-downloading the MNIST dataset on every run, we also download the dataset in advance and bake it into the image, so the modified Dockerfile is:

```dockerfile
FROM pytorch/pytorch:2.2.1-cuda12.1-cudnn8-runtime

ADD . /opt/pytorch-mnist
WORKDIR /opt/pytorch-mnist

# Add folder for the logs.
RUN mkdir /katib
RUN chgrp -R 0 /opt/pytorch-mnist \
  && chmod -R g+rwX /opt/pytorch-mnist \
  && chgrp -R 0 /katib \
  && chmod -R g+rwX /katib

ENTRYPOINT ["python3", "/opt/pytorch-mnist/mnist.py"]
```

Using the final mnist_cnn.pt model file produced by training for prediction and testing gives the following result: {metricName: accuracy, metricValue: 0.9039};{metricName: loss, metricValue: 0.2756}, i.e. the model reaches 90.39% accuracy with a loss of 0.2756, which means it performs well on the FashionMNIST dataset. During training the epochs parameter matters a lot: it is the number of training passes, too few gives poor results and too many leads to overfitting, and when experimenting we can adjust it to control how long the training run takes.

Through its webhooks, kueue performs admission control and resource limiting for GPU workloads such as AI/ML jobs and provides a notion of tenant isolation, which gives k8s GPU support a much richer set of scenarios. If your laptop's GPU is powerful enough, you can even deploy open-source large models such as ChatGLM to the k8s cluster and build your own private, offline model service.

Submitting PyTorch training jobs to multiple clusters with karmada

Create the multi-cluster k8s environment

For multi-cluster management we use karmada, with member2 serving as the host cluster for the control plane and member3 and member4 as member clusters. After finishing the minikube NVIDIA GPU configuration, create the three clusters with the following commands:

```
docker network create --driver=bridge --subnet=xxx.xxx.xxx.0/24 --ip-range=xxx.xxx.xxx.0/24 minikube-net

minikube start --driver docker --cpus max --memory max --container-runtime docker --gpus all --network minikube-net --subnet='xxx.xxx.xxx.xxx' --mount-string="/run/udev:/run/udev" --mount -p member2 --static-ip='xxx.xxx.xxx.xxx'
minikube start --driver docker --cpus max --memory max --container-runtime docker --gpus all --network minikube-net --subnet='xxx.xxx.xxx.xxx' --mount-string="/run/udev:/run/udev" --mount -p member3 --static-ip='xxx.xxx.xxx.xxx'
minikube start --driver docker --cpus max --memory max --container-runtime docker --gpus all --network minikube-net --subnet='xxx.xxx.xxx.xxx' --mount-string="/run/udev:/run/udev" --mount -p member4 --static-ip='xxx.xxx.xxx.xxx'

➜  ~ minikube profile list
|---------|-----------|---------|-----------------|------|---------|---------|-------|--------|
| Profile | VM Driver | Runtime |       IP        | Port | Version | Status  | Nodes | Active |
|---------|-----------|---------|-----------------|------|---------|---------|-------|--------|
| member2 | docker    | docker  | xxx.xxx.xxx.xxx | 8443 | v1.28.3 | Running |     1 |        |
| member3 | docker    | docker  | xxx.xxx.xxx.xxx | 8443 | v1.28.3 | Running |     1 |        |
| member4 | docker    | docker  | xxx.xxx.xxx.xxx | 8443 | v1.28.3 | Running |     1 |        |
|---------|-----------|---------|-----------------|------|---------|---------|-------|--------|
```

Install the Training Operator and karmada on the three clusters, and also install the Training Operator on the karmada control plane; otherwise PyTorchJob resources cannot be submitted to the control plane. Because spreading a single PyTorch job across different clusters makes service discovery and master-worker communication difficult, here we only demonstrate submitting each PyTorch job to a single cluster as a whole; scheduling multiple PyTorch jobs onto different clusters for training is achieved through the kosmos control plane.

Create the training job on the karmada control plane

apiVersion: "kubeflow.org/v1" kind: PyTorchJob metadata: name: pytorch-simple namespace: kubeflow spec: pytorchReplicaSpecs: Master: replicas: 1 restartPolicy: OnFailure template: spec: containers: - name: pytorch image: pytorch-mnist:2.2.1-cuda12.1-cudnn8-runtime imagePullPolicy: IfNotPresent command: - "python3" - "/opt/pytorch-mnist/mnist.py" - "--epochs=30" - "--batch-size" - "32" - "--test-batch-size" - "64" - "--lr" - "0.01" - "--momentum" - "0.9" - "--log-interval" - "10" - "--save-model" - "--log-path" - "/opt/pytorch-mnist/master.log" Worker: replicas: 1 restartPolicy: OnFailure template: spec: containers: - name: pytorch image: pytorch-mnist:2.2.1-cuda12.1-cudnn8-runtime imagePullPolicy: IfNotPresent command: - "python3" - "/opt/pytorch-mnist/mnist.py" - "--epochs=30" - "--batch-size" - "32" - "--test-batch-size" - "64" - "--lr" - "0.01" - "--momentum" - "0.9" - "--log-interval" - "10" - "--save-model" - "--log-path" - "/opt/pytorch-mnist/worker.log"

Create the propagation policy on the karmada control plane:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: pytorchjob-propagation
  namespace: kubeflow
spec:
  resourceSelectors:
    - apiVersion: kubeflow.org/v1
      kind: PyTorchJob
      name: pytorch-simple
      namespace: kubeflow
  placement:
    clusterAffinity:
      clusterNames:
        - member3
        - member4
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member3
            weight: 1
          - targetCluster:
              clusterNames:
                - member4
            weight: 1
```

We can then see that the training job was successfully propagated to the member3 and member4 member clusters and executed there:

```
➜  pytorch kubectl karmada --kubeconfig ~/karmada-apiserver.config get po -n kubeflow
NAME                                 CLUSTER   READY   STATUS      RESTARTS   AGE
pytorch-simple-master-0              member3   0/1     Completed   0          7m51s
pytorch-simple-worker-0              member3   0/1     Completed   0          7m51s
training-operator-64c768746c-gvf9n   member3   1/1     Running     0          165m
pytorch-simple-master-0              member4   0/1     Completed   0          7m51s
pytorch-simple-worker-0              member4   0/1     Completed   0          7m51s
training-operator-64c768746c-nrkdv   member4   1/1     Running     0          168m
```

Summary

Building a local k8s GPU environment makes it easy to develop and test AI-related workloads and puts an otherwise idle laptop GPU to good use. Frameworks such as kueue, karmada, kuberay, and ray make cloud-native scheduling of GPUs and other heterogeneous compute possible. So far we have only submitted and run training jobs on a single k8s cluster; real AI/ML or large-model training is considerably more complex, and the networking and technical architecture need careful design. Running training and inference with thousands or tens of thousands of GPUs on k8s clusters requires solving problems including, but not limited to, the following:

- Network performance: traditional data-center networks are typically 10 Gbps, which is far from enough for large-model training and inference, so an RDMA (Remote Direct Memory Access) network needs to be built.
- GPU scheduling and configuration: how to schedule and manage GPUs across multiple clouds and clusters.
- Monitoring and debugging: how to effectively monitor and debug training jobs, and how to handle failures and recover the service.

References

1. [Go](https://go.dev/)

2. [Docker](https://docker.com)

3. [minikube](https://minikube.sigs.k8s.io/docs/tutorials/nvidia/)

4. [nvidia](https://docs.nvidia.com/datacenter/tesla/tesla-installation-notes/index.html)

5. [kubernetes](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/)

6. [kueue](https://github.com/kubernetes-sigs/kueue)

7. [kubeflow](https://github.com/kubeflow/training-operator)

8. [ray](https://github.com/ray-project/ray)

9. [kuberay](https://github.com/ray-project/kuberay)

10. [karmada](https://github.com/karmada-io/karmada)

11. [kind](https://kind.sigs.k8s.io/)

12. [k3s](https://github.com/k3s-io/k3s)

13. [k3d](https://github.com/k3d-io/k3d)


